
feat(dropbox): move media upload processing from LV jobs to cron task #534

Merged
smarcet merged 5 commits into main from feature/cron-media-upload-processing
Apr 22, 2026

Conversation

@smarcet (Collaborator) commented Apr 21, 2026

ref: https://app.clickup.com/t/86b9hh9y6

Summary by CodeRabbit

  • New Features

    • Scheduled background processing for pending media uploads with retry/error tracking
    • Dropbox OAuth2 token auto-refresh and improved upload client with rate-limit aware retries and jitter
    • Console commands: process pending uploads, reconcile uploads, and Dropbox token diagnostic
  • Bug Fixes / Maintenance

    • Persist pending uploads for robust retrying, cleanup, and safer deletion flows
  • Database

    • New table and migration to store pending media uploads and link to media records
  • Tests

    • New unit tests covering pending-upload processing and Dropbox retry behavior

feat(dropbox-client): manage http 429 on upload
@coderabbitai (Bot) commented Apr 21, 2026

Warning

Rate limit exceeded

@smarcet has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 45 minutes and 45 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 45 minutes and 45 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 69cd1f9e-8a66-42ab-af59-5b330e110b34

📥 Commits

Reviewing files that changed from the base of the PR and between 6f0c60f and 4f07f71.

📒 Files selected for processing (9)
  • app/Console/Commands/ProcessPendingMediaUploadsCommand.php
  • app/Console/Commands/ReconcileMediaUploadsCommand.php
  • app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
  • app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
  • app/Services/Model/Imp/PresentationService.php
  • app/Services/Model/Imp/SummitService.php
  • tests/Unit/Services/ProcessPendingMediaUploadsTest.php
  • tests/Unit/Services/RetryAfterDropboxClientTest.php
📝 Walkthrough

Walkthrough

Adds a DB-backed PendingMediaUpload workflow: new entity, repository, migrations, service processing, scheduled Artisan command(s), Dropbox token auto-refresh and 429-aware client, provider/kernel wiring, config changes, and unit tests.

Changes

Cohort / File(s) Summary
Console commands & scheduler
app/Console/Commands/ProcessPendingMediaUploadsCommand.php, app/Console/Commands/ReconcileMediaUploadsCommand.php, app/Console/Commands/TestDropboxTokenCommand.php, app/Console/Kernel.php
New Artisan commands (summit:process-pending-media-uploads, summit:reconcile-media-uploads {summit_id} {media_upload_type_id?}, dropbox:test-token) with logging/error handling. Kernel registers commands and schedules summit:process-pending-media-uploads every minute with withoutOverlapping/onOneServer.
Domain model & migrations
app/Models/Foundation/Summit/PendingMediaUpload.php, database/migrations/model/Version20260421150000.php, database/migrations/model/Version20260421200000.php
New Doctrine entity PendingMediaUpload (statuses, fields, relation to PresentationMediaUpload) and two migrations: initial table plus addition of PresentationMediaUploadID FK/index.
Repository & DI bindings
app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php, app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php, app/Repositories/RepositoriesProvider.php
Added repository interface and Doctrine implementation with methods: getPendingUploads(), resetStuckProcessingRows(), deleteCompletedOlderThan(), deleteByMediaUpload(). RepositoriesProvider binds the interface to the Doctrine repo.
Summit & Presentation services
app/Services/Model/ISummitService.php, app/Services/Model/Imp/SummitService.php, app/Services/Model/Imp/PresentationService.php
ISummitService adds processPendingMediaUploads(). PresentationService persists PendingMediaUpload rows instead of dispatching ProcessMediaUpload jobs. SummitService implements processing loop (reset stuck, mark processing, download, upload to public/private storage, mark completed/error, attempts and cleanup). Constructor signatures updated to inject repository.
Dropbox token & client
app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php, app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php, app/Services/FileSystem/Dropbox/DropboxServiceProvider.php, app/Services/FileSystem/Dropbox/DropboxAdapter.php, config/filesystems.php
New AutoRefreshingDropBoxTokenService (refresh-token exchange and Spatie refresh provider) and RetryAfterDropboxClient (chunk upload retry logic with 429/Retry-After+jitter and stream rewind). Service provider switched to use token/provider and retry-aware client; DropboxAdapter logs detailed BadRequest responses; filesystems config extended with refresh_token/app_key/app_secret.
Jobs & other minor
app/Jobs/ProcessMediaUpload.php
Marked ProcessMediaUpload job as @deprecated in PHPDoc only.
Tests
tests/Unit/Services/PresentationServiceMediaUploadTest.php, tests/Unit/Services/ProcessPendingMediaUploadsTest.php, tests/Unit/Services/RetryAfterDropboxClientTest.php
New unit tests for PresentationService persisting PendingMediaUpload, SummitService processing flows (max retries, transient failures, cleanup, stuck reset), and RetryAfterDropboxClient 429/retry/sleep behaviors.
Misc imports/formatting
app/Repositories/RepositoriesProvider.php, provider wiring files
Small import/whitespace adjustments and DI/provider wiring changes to register the new repository and Dropbox client/token wiring.

Sequence Diagram(s)

sequenceDiagram
    participant Scheduler as Scheduler/Cron
    participant Cmd as ProcessPendingMediaUploadsCommand
    participant Svc as SummitService
    participant Repo as PendingMediaUploadRepository
    participant DB as Database
    participant FileUtil as FileUtils
    participant StoragePub as PublicStorageStrategy
    participant StoragePriv as PrivateStorageStrategy
    participant Dropbox as RetryAfterDropboxClient

    Scheduler->>Cmd: trigger
    activate Cmd
    Cmd->>Svc: processPendingMediaUploads(max_retries)
    deactivate Cmd

    activate Svc
    Svc->>Repo: resetStuckProcessingRows()
    Repo->>DB: UPDATE STATUS_PROCESSING -> STATUS_PENDING
    DB-->>Repo: rows_affected

    Svc->>Repo: getPendingUploads()
    Repo->>DB: SELECT WHERE status = STATUS_PENDING
    DB-->>Repo: pending_list

    loop for each pending upload
        Svc->>Repo: set STATUS_PROCESSING, increment attempts
        Repo->>DB: persist change
        Svc->>FileUtil: download(publicPath) -> tempFile
        FileUtil-->>Svc: tempFilePath
        Svc->>StoragePub: save(tempFile) -> publicPath
        Svc->>StoragePriv: save(tempFile) -> privatePath
        StoragePriv->>Dropbox: upload (429-aware retry)
        alt upload success
            Svc->>Repo: set STATUS_COMPLETED, set processed_date
        else error
            Svc->>Repo: set STATUS_ERROR or STATUS_PENDING, set error_message
        end
        Svc->>FileUtil: cleanup tempFile / remote temp
    end

    Svc->>Repo: deleteCompletedOlderThan(days=7)
    Repo->>DB: DELETE ...
    DB-->>Repo: deleted_count

    Svc-->>Cmd: return stats {processed, errors}
    deactivate Svc

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • romanetar
  • JpMaxMan

Poem

🐰 I hopped through rows and files today,
I queued no jobs — I stored the way.
Dropbox refreshed, retried with care,
Chunks rewound and back in the air.
A crunchy carrot for each processed play. 🥕

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title clearly and specifically describes the main architectural change: moving media upload processing from Laravel jobs to a scheduled cron task, which aligns with the comprehensive changeset.
Docstring Coverage ✅ Passed Docstring coverage is 82.95% which is sufficient. The required threshold is 80.00%.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions commented

📘 OpenAPI / Swagger preview

➡️ https://OpenStackweb.github.io/summit-api/openapi/pr-534/

This page is automatically updated on each push to this PR.

@coderabbitai (Bot) left a comment

Actionable comments posted: 18

🧹 Nitpick comments (2)
app/Jobs/ProcessMediaUpload.php (1)

26-30: Consider adding a concrete removal target.

The deprecation note is helpful, but without a target version or removal ticket it's easy to lose track of. If you keep the job around for in-flight queue drainage, mention the cutoff commit/deployment (or link the ClickUp task) so the future cleanup can be scheduled deterministically.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Jobs/ProcessMediaUpload.php` around lines 26 - 30, Update the deprecation
block on the ProcessMediaUpload class to include a concrete removal target
(e.g., a specific release version or commit hash) or a tracking ticket ID so
future cleanup is deterministic; locate the docblock above the
ProcessMediaUpload class definition and replace the vague “Can be removed in a
future cleanup” line with a precise marker such as “REMOVE_BY: vX.Y.Z / commit
<hash> / ClickUp `#12345`” and, if relevant, add a short note that it remains only
for draining in-flight queue jobs.
app/Services/Model/Imp/PresentationService.php (1)

1153-1162: Consider extracting pending-upload creation into one helper.

The PendingMediaUpload field mapping is duplicated in add/update paths; a private helper would keep future fields/status changes consistent.

♻️ Proposed refactor shape
+    private function createPendingMediaUpload(
+        Summit $summit,
+        $mediaUploadType,
+        PresentationMediaUpload $mediaUpload,
+        FileUploadInfo $fileInfo
+    ): void {
+        $pendingUpload = new PendingMediaUpload();
+        $pendingUpload->setSummitId($summit->getId());
+        $pendingUpload->setMediaUploadTypeId($mediaUploadType->getId());
+        $pendingUpload->setPublicPath($mediaUpload->getPath(IStorageTypesConstants::PublicType));
+        $pendingUpload->setPrivatePath($mediaUpload->getPath(IStorageTypesConstants::PrivateType));
+        $pendingUpload->setFileName($fileInfo->getFileName());
+        $pendingUpload->setTempFilePath($fileInfo->getFilePath());
+        $pendingUpload->setStatus(PendingMediaUpload::STATUS_PENDING);
+        $this->pending_media_upload_repository->add($pendingUpload);
+    }

Also applies to: 1277-1286

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Services/Model/Imp/PresentationService.php` around lines 1153 - 1162, The
PendingMediaUpload population is duplicated; extract a private helper (e.g.,
buildPendingMediaUpload or createPendingMediaUpload) that accepts parameters
like $summit, $mediaUploadType, $mediaUpload, $fileInfo and returns a fully
populated PendingMediaUpload (calls setSummitId, setMediaUploadTypeId,
setPublicPath, setPrivatePath, setFileName, setTempFilePath, setStatus). Replace
the inline construction in the current add path and the other occurrence (the
add/update block that also calls pending_media_upload_repository->add) with
calls to this helper and then call
pending_media_upload_repository->add($pendingUpload) as before so all field
mappings remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/Console/Commands/ProcessPendingMediaUploadsCommand.php`:
- Around line 66-87: The handle method in ProcessPendingMediaUploadsCommand
currently swallows failures by not returning an exit code; update handle (the
method that calls $this->summit_service->processPendingMediaUploads()) to
explicitly return 0 on success (after the info log that prints $delta and
$stats) and return 1 in the catch block (after calling Log::warning($ex) and
$this->error($ex->getMessage())) so the Artisan command signals failure to
schedulers/cron.

In `@app/Console/Commands/ReconcileMediaUploadsCommand.php`:
- Around line 65-103: Change the handle() signature to return int and ensure the
method returns 0 on success and 1 on failure: add a return type declaration to
handle(): int, then after the successful completion (after the final
$this->info(...) call) return 0; and inside the catch (Exception $ex) block,
after logging and $this->error(...), return 1; (references: handle(),
reconcileMediaUploadsToPrivateStorage(), $this->info(), $this->error(), the
catch block that currently logs $ex).

In `@app/Console/Commands/TestDropboxTokenCommand.php`:
- Around line 55-63: The code prints a portion of the Dropbox access token
(accessed via $tokenService->getToken() into $accessToken) which can leak
secrets; remove any logging of the token or its prefix and instead log a
non-sensitive success message (e.g., "Access token obtained successfully") in
the TestDropboxTokenCommand handling code, eliminating the substr($accessToken,
0, 12) usage and any direct or partial token output while preserving the
empty-token check and exit behavior.

In `@app/Console/Kernel.php`:
- Around line 58-59: The schedule for ProcessPendingMediaUploadsCommand uses
->withoutOverlapping()->onOneServer() at 5-minute cadence but a stuck chunk
(given RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 and
maxUploadChunkRetries = 5) can hold the lock much longer; update the scheduler
invocation for ProcessPendingMediaUploadsCommand (and the other affected
scheduled entry) to pass an explicit expiry to withoutOverlapping (note that
Laravel's argument is in minutes, e.g. ->withoutOverlapping(30)) so the lock
will expire after a safe upper bound (30 minutes) if a run is killed mid-sleep.
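A minimal sketch of what the suggested scheduler entry could look like, assuming Laravel's `Schedule` API; the five-minute cadence and the 30-minute lock expiry are taken from the review discussion and are otherwise assumptions (note that `withoutOverlapping()` takes minutes, not seconds):

```php
// app/Console/Kernel.php -- hypothetical schedule() excerpt, not the PR's exact code.
protected function schedule(Schedule $schedule): void
{
    $schedule->command('summit:process-pending-media-uploads')
        ->everyFiveMinutes()
        ->withoutOverlapping(30) // lock auto-expires after 30 minutes (argument is in minutes)
        ->onOneServer();
}
```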

In `@app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php`:
- Around line 68-82: resetStuckProcessingRows currently uses p.last_edited to
detect stale STATUS_PROCESSING rows (in resetStuckProcessingRows) which can
reclaim rows still being processed; add a durable in-flight marker instead of
relying on last_edited: introduce a ProcessingStartedAt (datetime) column on the
PendingMediaUpload model, set ProcessingStartedAt when
processPendingMediaUploads transitions a row to
PendingMediaUpload::STATUS_PROCESSING, and change resetStuckProcessingRows to
compare ProcessingStartedAt against the stale threshold (or alternatively use a
DB-level advisory lock check) so only genuinely abandoned jobs are reverted;
update any place that sets STATUS_PROCESSING (e.g., processPendingMediaUploads)
to populate/clear ProcessingStartedAt and document the new assumption if you
prefer the lighter documentation-only approach.
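The staleness check this comment proposes reduces to a small pure predicate; a sketch, where the ProcessingStartedAt column, the Unix-timestamp representation, and the 15-minute threshold are all assumptions:

```php
<?php
// Decide whether a STATUS_PROCESSING row is stale enough to reclaim.
// $processingStartedAt is the Unix timestamp set when the row entered
// STATUS_PROCESSING (the proposed ProcessingStartedAt column); null means
// the marker was never set, so the row is left alone.
function isStuckProcessing(?int $processingStartedAt, int $now, int $staleAfterSeconds = 900): bool
{
    if ($processingStartedAt === null) {
        return false;
    }
    return ($now - $processingStartedAt) >= $staleAfterSeconds;
}
```

Keeping the clock as a parameter (rather than calling time() inside) makes the threshold behaviour trivially unit-testable.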
- Around line 49-52: getOrderMappings() currently returns an indexed array which
breaks lookups in Order::apply2Query() because it expects keys by field name;
update DoctrinePendingMediaUploadRepository::getOrderMappings() to return an
associative array mapping sortable field names (e.g., 'id', 'created') to their
corresponding DQL/column expressions (the same shape as getFilterMappings() in
other Doctrine repositories) so that isset($mappings[$order->getField()])
succeeds and ordering is applied.
- Around line 87-101: The deleteCompletedOlderThan method currently uses
setMaxResults on a DQL DELETE which is ignored; change the implementation in
deleteCompletedOlderThan (involving getEntityManager and PendingMediaUpload) to
first SELECT the IDs of PendingMediaUpload matching status =
PendingMediaUpload::STATUS_COMPLETED and processed_date < $cutoff_date with
setMaxResults($limit), collect those IDs, then run a DELETE WHERE id IN (...)
(or use a parameterized DQL delete by id list) to remove only that batch; ensure
you handle the empty-ID case (return 0) and keep using the same cutoff_date
logic and return the number of deleted rows.
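The select-then-delete batching this comment describes might look roughly like the following Doctrine sketch; the entity, constant, and field names are taken from the walkthrough and are otherwise assumptions, and `getSingleColumnResult()` assumes a recent Doctrine ORM version:

```php
// Hypothetical repository method: bounded cleanup of completed rows.
// setMaxResults() is honored on SELECTs but silently ignored on DQL DELETEs,
// hence the two-step shape.
public function deleteCompletedOlderThan(\DateTime $cutoff_date, int $limit = 1000): int
{
    $em = $this->getEntityManager();

    // Step 1: select a bounded batch of ids.
    $ids = $em->createQueryBuilder()
        ->select('p.id')
        ->from(PendingMediaUpload::class, 'p')
        ->where('p.status = :status')
        ->andWhere('p.processed_date < :cutoff')
        ->setParameter('status', PendingMediaUpload::STATUS_COMPLETED)
        ->setParameter('cutoff', $cutoff_date)
        ->setMaxResults($limit)
        ->getQuery()
        ->getSingleColumnResult();

    if (count($ids) === 0) {
        return 0;
    }

    // Step 2: delete only that batch.
    return $em->createQuery(
        'DELETE FROM ' . PendingMediaUpload::class . ' p WHERE p.id IN (:ids)'
    )->setParameter('ids', $ids)->execute();
}
```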

In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php`:
- Around line 37-44: The constructor currently calls refreshAccessToken() but
does nothing if that initial refresh fails and the underlying HTTP call has no
timeout; change __construct to propagate failure (throw a specific exception)
when refreshAccessToken() cannot obtain a token so the service fails fast rather
than living with an empty token, and update refreshAccessToken() to set a
bounded timeout on the external token request (e.g., via your HTTP client
options) and handle/throw errors on non-200 responses; also ensure getToken()
returns a valid token or throws if none is available.

In `@app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php`:
- Line 37: The DEFAULT_RETRY_AFTER_SECONDS constant in RetryAfterDropboxClient
is set to 300s which is too high and conflicts with tests expecting a 1s
default; change the value to a lower sensible default (e.g., 30 or 60 seconds)
to match Dropbox guidance and test expectations, update
RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS accordingly, and adjust any
related tests in RetryAfterDropboxClientTest.php if they rely on a specific
default so they remain consistent with the new constant.
- Around line 83-84: The code currently casts the Retry-After header string
directly to int which yields 0 for HTTP-date values; change the logic in
RetryAfterDropboxClient (where $retryAfter is computed from
$exception->getResponse()->getHeaderLine('Retry-After')) to first check if the
header is numeric and use (int) when it is, otherwise parse it as an HTTP-date
(e.g. via strtotime) and compute seconds = max(0, parsed_timestamp - time()); if
parsing fails or results in non-positive value fall back to
self::DEFAULT_RETRY_AFTER_SECONDS, and ensure $retryAfter is an int before use.
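A pure helper for the numeric-versus-HTTP-date parsing described above could look like this; the function name and the injected $now parameter are illustrative choices to keep the logic clock-independent and testable:

```php
<?php
// Parse a Retry-After header: either delta-seconds ("120") or an
// HTTP-date ("Wed, 21 Oct 2026 07:28:00 GMT"). Falls back to $default
// when the header is empty, unparseable, or already in the past.
function parseRetryAfter(string $header, int $now, int $default): int
{
    if ($header === '') {
        return $default;
    }
    if (ctype_digit($header)) {
        return (int) $header;
    }
    $ts = strtotime($header);
    if ($ts === false) {
        return $default;
    }
    $delta = $ts - $now;
    return $delta > 0 ? $delta : $default;
}
```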
- Around line 35-104: The tests expect 429 Retry-After behavior for direct
endpoint calls, but RetryAfterDropboxClient only implements that logic in
uploadChunk; add matching retry-after + jitter handling to the public methods
contentEndpointRequest and rpcEndpointRequest in RetryAfterDropboxClient (and
reuse the same constants and jitter logic used in uploadChunk:
DEFAULT_RETRY_AFTER_SECONDS, MIN_JITTER_MS, MAX_JITTER_MS) so they catch
RequestException/ClientException, read Retry-After, sleep with jitter, rewind
any stream if needed, and retry up to the same $this->maxUploadChunkRetries
semantics; alternatively, if that behavior is not desired, remove/adjust the
failing test cases in tests/Unit/Services/RetryAfterDropboxClientTest.php
instead of changing the class.
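The shared retry shape this comment asks for (catch the rate-limit error, read Retry-After, sleep with jitter, retry up to a cap) can be factored into one helper. This sketch is framework-free: the exception class is a hypothetical stand-in for Guzzle's ClientException, and the sleeper is injected so tests can record sleeps instead of waiting:

```php
<?php
// Hypothetical stand-in for a 429 response carrying a Retry-After value in seconds.
class RateLimitedException extends \Exception
{
    public function __construct(public readonly int $retryAfterSeconds)
    {
        parent::__construct('rate limited');
    }
}

// Run $op, retrying on rate limits with Retry-After plus random jitter.
// $sleepMs receives the computed delay in milliseconds.
function withRateLimitRetries(
    callable $op,
    int $maxRetries,
    callable $sleepMs,
    int $minJitterMs = 100,
    int $maxJitterMs = 1000
) {
    $attempt = 0;
    while (true) {
        try {
            return $op();
        } catch (RateLimitedException $e) {
            $attempt++;
            if ($attempt > $maxRetries) {
                throw $e; // retries exhausted; surface the 429 to the caller
            }
            $sleepMs($e->retryAfterSeconds * 1000 + random_int($minJitterMs, $maxJitterMs));
        }
    }
}
```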

In `@app/Services/Model/Imp/SummitService.php`:
- Around line 4332-4334: The catch block in SummitService.php that currently
does "catch (\Exception $ex) { Log::error($ex); }" swallows processor-level
failures; after logging the exception you must re-throw it so the calling
command can fail. Edit the catch in the relevant method inside the SummitService
class to call Log::error(...) and then throw $ex (or rethrow using throw;) so
setup/query/reset failures propagate instead of returning success.
- Around line 4291-4298: The temp file cleanup is currently executed inside the
$this->tx_service->transaction(...) closure (where PendingMediaUpload status is
set), which runs before the DB commit; move the call to
\Libs\Utils\FileUtils::cleanLocalAndRemoteFile($localPath,
$pending_upload->getTempFilePath()) out of the transaction closure so it only
runs after the transaction completes successfully—i.e., keep the status update
and setProcessedDate(...) inside the transaction (using
$this->tx_service->transaction and the PendingMediaUpload methods),
capture/return any needed identifiers or paths from the closure, then call
FileUtils::cleanLocalAndRemoteFile(...) afterward and handle/log errors from
cleanup without rolling back DB changes.
- Around line 4244-4316: The exception handler currently marks every failed
upload as PendingMediaUpload::STATUS_ERROR and cleans both local and remote temp
files, preventing retries; update the catch block in
SummitService::processPendingMediaUploads to check
$pending_upload->getAttempts() (or equivalent) against its max retries (e.g.
$pending_upload->getMaxRetries()) and if attempts < max then set status back to
PendingMediaUpload::STATUS_PENDING (instead of STATUS_ERROR) and only delete the
local temp copy (call FileUtils::cleanLocalFile($localPath) or equivalent)
inside the tx_service->transaction; if attempts >= max, then set STATUS_ERROR
and setErrorMessage($ex->getMessage()) and clean both local and remote temp
files as before.
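The retry-versus-terminal decision described above boils down to a tiny pure function; in this sketch the status strings and the clean-remote flag mirror the comment's description, and the names are illustrative rather than the PR's actual API:

```php
<?php
// Decide what to do with a pending upload after a failed attempt:
// requeue while attempts remain, otherwise mark it as a terminal error.
// The remote temp file is only cleaned once the row is terminal.
function failureOutcome(int $attempts, int $maxRetries, string $errorMessage): array
{
    if ($attempts < $maxRetries) {
        return ['status' => 'PENDING', 'error_message' => null, 'clean_remote_temp' => false];
    }
    return ['status' => 'ERROR', 'error_message' => $errorMessage, 'clean_remote_temp' => true];
}
```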

In `@tests/Unit/Services/PresentationServiceMediaUploadTest.php`:
- Around line 49-193: The tests currently only set up mocks but never call
PresentationService::addMediaUploadTo or ::updateMediaUploadFrom so the
persist()->once() expectations and the important regression check
(ProcessMediaUpload::dispatch not called) are never exercised; update the tests
to instantiate (or partially mock) PresentationService so you actually invoke
addMediaUploadTo(...) and updateMediaUploadFrom(...) with the prepared
$presentation, $fileInfo, $mediaUpload, and tx_service, ensure
tx_service->transaction() executes its callback (e.g., mocked with andReturnUsing) while
Registry::getManager('model') returns your $em so the $em->persist(...)
expectation is met, and explicitly mock/spy ProcessMediaUpload::dispatch (or the
dispatching mechanism used) to assert it is never called in
testProcessMediaUploadJobNotDispatched; alternatively, if you cannot construct
the service, remove these placeholder tests and consolidate into a single
integration test that verifies DB row creation and absence of dispatch.

In `@tests/Unit/Services/ProcessPendingMediaUploadsTest.php`:
- Around line 46-287: The tests mock EntityManager and expect raw DQL calls, but
SummitService::processPendingMediaUploads() uses IPendingMediaUploadRepository
methods (resetStuckProcessingRows, getPendingUploads, deleteCompletedOlderThan)
and private properties (pending_media_upload_repository, summit_repository) plus
a protected tx_service; update the tests to mock IPendingMediaUploadRepository
instead of EM, set expectations on resetStuckProcessingRows(10),
getPendingUploads(), and deleteCompletedOlderThan(7,1000) and return the
appropriate PendingMediaUpload mocks, and inject that mock into the
SummitService instance; also set the protected tx_service and private
summit_repository/pending_media_upload_repository on the real SummitService
object using reflection (or the service constructor/setters if available) so the
service reads the injected mocks during processPendingMediaUploads(), and remove
all expectations on $em->createQuery(...) / DQL patterns.

In `@tests/Unit/Services/RetryAfterDropboxClientTest.php`:
- Around line 142-147: The test is asserting sleep counts after a call that is
expected to throw, so the assertion is never reached; wrap the throwing call in
a try/finally so the finally block always runs and performs the Sleep
assertions. Specifically, in
RetryAfterDropboxClientTest::testContentEndpointRequest wrap the
$client->contentEndpointRequest(...) invocation in try { ... } finally {
Sleep::assertSleptTimes(4); } (keep the existing
$this->expectException(ClientException::class) outside/above the try), and do
the same pattern in testContentEndpointRequestNon429Exception replacing the
finally assertion with Sleep::assertNeverSlept(); ensure you do not catch and
swallow the exception (do not add a catch that prevents rethrow).
- Around line 45-227: The tests call
RetryAfterDropboxClient::contentEndpointRequest but that method is not
overridden to implement 429 retry/sleep logic (only uploadChunk is), and the
tests also assume a 1s default whereas DEFAULT_RETRY_AFTER_SECONDS is 300;
update the tests to target the actual retrying code or align constants: either
(A) add an override of contentEndpointRequest in RetryAfterDropboxClient that
implements the same retry/Retry-After + jitter behavior (using a 1-second
default constant if you intend the tests to expect ~1s) and then keep the
current tests, or (B) change the tests to exercise uploadChunk (create the
proper stream/chunk inputs) and assert sleeps using DEFAULT_RETRY_AFTER_SECONDS
(300s) plus jitter ranges; reference contentEndpointRequest, uploadChunk,
RetryAfterDropboxClient, and DEFAULT_RETRY_AFTER_SECONDS to locate where to
implement or adjust the tests and constants.

---

Nitpick comments:
In `@app/Jobs/ProcessMediaUpload.php`:
- Around line 26-30: Update the deprecation block on the ProcessMediaUpload
class to include a concrete removal target (e.g., a specific release version or
commit hash) or a tracking ticket ID so future cleanup is deterministic; locate
the docblock above the ProcessMediaUpload class definition and replace the vague
“Can be removed in a future cleanup” line with a precise marker such as
“REMOVE_BY: vX.Y.Z / commit <hash> / ClickUp `#12345`” and, if relevant, add a
short note that it remains only for draining in-flight queue jobs.

In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1153-1162: The PendingMediaUpload population is duplicated;
extract a private helper (e.g., buildPendingMediaUpload or
createPendingMediaUpload) that accepts parameters like $summit,
$mediaUploadType, $mediaUpload, $fileInfo and returns a fully populated
PendingMediaUpload (calls setSummitId, setMediaUploadTypeId, setPublicPath,
setPrivatePath, setFileName, setTempFilePath, setStatus). Replace the inline
construction in the current add path and the other occurrence (the add/update
block that also calls pending_media_upload_repository->add) with calls to this
helper and then call pending_media_upload_repository->add($pendingUpload) as
before so all field mappings remain consistent.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fdba4aff-1958-43af-9cb2-0eff9771604d

📥 Commits

Reviewing files that changed from the base of the PR and between 412e56d and 27d6008.

📒 Files selected for processing (20)
  • app/Console/Commands/ProcessPendingMediaUploadsCommand.php
  • app/Console/Commands/ReconcileMediaUploadsCommand.php
  • app/Console/Commands/TestDropboxTokenCommand.php
  • app/Console/Kernel.php
  • app/Jobs/ProcessMediaUpload.php
  • app/Models/Foundation/Summit/PendingMediaUpload.php
  • app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php
  • app/Repositories/RepositoriesProvider.php
  • app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
  • app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php
  • app/Services/FileSystem/Dropbox/DropboxServiceProvider.php
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
  • app/Services/Model/ISummitService.php
  • app/Services/Model/Imp/PresentationService.php
  • app/Services/Model/Imp/SummitService.php
  • config/filesystems.php
  • database/migrations/model/Version20260421150000.php
  • tests/Unit/Services/PresentationServiceMediaUploadTest.php
  • tests/Unit/Services/ProcessPendingMediaUploadsTest.php
  • tests/Unit/Services/RetryAfterDropboxClientTest.php

Comment thread app/Console/Commands/ProcessPendingMediaUploadsCommand.php Outdated
Comment thread app/Console/Commands/ReconcileMediaUploadsCommand.php Outdated
Comment on lines +55 to +63
$accessToken = $tokenService->getToken();

if (empty($accessToken)) {
    $this->error('   Token service returned an empty access token.');
    return 1;
}

$this->info('   Access token obtained: ' . substr($accessToken, 0, 12) . '...');


⚠️ Potential issue | 🟡 Minor

Do not print access-token material.

Even a prefix can leak sensitive token metadata into shell history or CI logs.

🛡️ Proposed fix
-        $this->info('   Access token obtained: ' . substr($accessToken, 0, 12) . '...');
+        $this->info('   Access token obtained.');
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (the token prefix is no longer printed):

$accessToken = $tokenService->getToken();

if (empty($accessToken)) {
    $this->error('   Token service returned an empty access token.');
    return 1;
}

$this->info('   Access token obtained.');
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Console/Commands/TestDropboxTokenCommand.php` around lines 55 - 63, The
code prints a portion of the Dropbox access token (accessed via
$tokenService->getToken() into $accessToken) which can leak secrets; remove any
logging of the token or its prefix and instead log a non-sensitive success
message (e.g., "Access token obtained successfully") in the
TestDropboxTokenCommand handling code, eliminating the substr($accessToken, 0,
12) usage and any direct or partial token output while preserving the
empty-token check and exit behavior.

Comment thread app/Console/Kernel.php
Comment on lines +58 to +59
\App\Console\Commands\ProcessPendingMediaUploadsCommand::class,
\App\Console\Commands\TestDropboxTokenCommand::class,

@coderabbitai (Bot) commented Apr 21, 2026

⚠️ Potential issue | 🟡 Minor

Scheduling looks correct; one consideration on overlap timing.

withoutOverlapping()->onOneServer() at a 5-minute cadence is appropriate. Note that with the current RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 combined with maxUploadChunkRetries = 5, a single stuck chunk could hold the cron lock for ~25 minutes per invocation. Consider passing ->withoutOverlapping($expiresAt) with an explicit expiry (e.g., 30 minutes) to avoid a stale lock if a run is killed mid-sleep. Related: see the comment on DEFAULT_RETRY_AFTER_SECONDS in RetryAfterDropboxClient.php.

Also applies to: 110-110
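The worst-case figure quoted above is easy to sanity-check with a few lines (constants as cited in the comment; a quick sketch, not project code):

```python
# Worst-case cron-lock hold time for one stuck chunk, assuming the client
# sleeps the full default Retry-After interval before each of its retries.
DEFAULT_RETRY_AFTER_SECONDS = 300   # RetryAfterDropboxClient default (5 min)
MAX_UPLOAD_CHUNK_RETRIES = 5        # maxUploadChunkRetries

worst_case_seconds = DEFAULT_RETRY_AFTER_SECONDS * MAX_UPLOAD_CHUNK_RETRIES
worst_case_minutes = worst_case_seconds / 60
print(f"{worst_case_minutes:.0f} minutes")  # 25 minutes
```

At a 5-minute cadence, that is five skipped ticks while the lock is held, which is why an explicit expiry is worth setting.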

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Console/Kernel.php` around lines 58 - 59, The schedule for
ProcessPendingMediaUploadsCommand uses ->withoutOverlapping()->onOneServer() at
5-minute cadence but a stuck chunk (given
RetryAfterDropboxClient::DEFAULT_RETRY_AFTER_SECONDS = 300 and
maxUploadChunkRetries = 5) can hold the lock much longer; update the scheduler
invocation for ProcessPendingMediaUploadsCommand (and the other affected
scheduled entry) to pass an explicit expiry to withoutOverlapping (e.g.,
->withoutOverlapping(30), since the parameter is expressed in minutes) so the
lock will expire after a safe upper bound (30 minutes) if a run is killed mid-sleep.

Collaborator Author
this runs on a single k8s pod

@smarcet, understood — with a single pod onOneServer() is effectively a no-op for lock contention between instances. The only residual edge case would be if the pod is evicted or OOM-killed mid-run: Laravel's default withoutOverlapping() lock TTL is 24 hours, so subsequent cron ticks would be skipped until it naturally expires. If that scenario is acceptable for this workload, the current setup is fine. Happy to close this comment if you're satisfied with the behaviour.

ʕ •ᴥ•ʔ

Comment thread app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
Comment thread app/Services/Model/Imp/SummitService.php
Comment thread tests/Unit/Services/PresentationServiceMediaUploadTest.php
Comment thread tests/Unit/Services/ProcessPendingMediaUploadsTest.php Outdated
Comment thread tests/Unit/Services/RetryAfterDropboxClientTest.php Outdated
Comment on lines +142 to +147
$this->expectException(ClientException::class);
$client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');

// Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
Sleep::assertSleptTimes(4);
}

⚠️ Potential issue | 🟡 Minor

Sleep::assertSleptTimes(4) after a throwing call is unreachable.

$client->contentEndpointRequest(...) on line 143 is expected to throw (expectException(ClientException::class) on line 142). Once it throws, no statement on lines 145–146 in the same test method executes, so Sleep::assertSleptTimes(4) never runs — the test only verifies that the exception propagates, not that 4 sleeps occurred.

🛠️ Move the assertion into a try/catch so it actually runs
-        $this->expectException(ClientException::class);
-        $client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');
-
-        // Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
-        Sleep::assertSleptTimes(4);
+        try {
+            $client->contentEndpointRequest('/test/endpoint', ['arg' => 'value'], 'body');
+            $this->fail('Expected ClientException was not thrown');
+        } catch (ClientException $e) {
+            // Verify sleep was called 4 times (not on the 5th/final attempt before throwing)
+            Sleep::assertSleptTimes(4);
+        }

The same pattern applies to testContentEndpointRequestNon429Exception (line 177–181), though assertNeverSlept() happens to be vacuously true there.
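The pitfall is language-agnostic; a minimal Python sketch (illustrative names, not the PHPUnit test) shows why nothing after the raising call executes, and why a try/finally still does:

```python
sleep_calls = []

def flaky_upload():
    """Stand-in for the 429 path: 'sleeps' 4 times across 5 attempts, then raises."""
    for attempt in range(5):
        if attempt < 4:
            sleep_calls.append(attempt)   # stand-in for Sleep::sleep()
    raise RuntimeError("429 retries exhausted")

post_call_ran = False
exception_seen = False
try:
    flaky_upload()
    post_call_ran = True                  # unreachable: the call above raises
except RuntimeError:
    exception_seen = True
finally:
    sleeps_observed = len(sleep_calls)    # a finally block runs either way

assert exception_seen and not post_call_ran and sleeps_observed == 4
```

The assertion after the call is dead code exactly like `Sleep::assertSleptTimes(4)` in the original test; only the except/finally paths are reachable once the call throws.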

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/Unit/Services/RetryAfterDropboxClientTest.php` around lines 142 - 147,
The test is asserting sleep counts after a call that is expected to throw, so
the assertion is never reached; wrap the throwing call in a try/finally so the
finally block always runs and performs the Sleep assertions. Specifically, in
RetryAfterDropboxClientTest::testContentEndpointRequest wrap the
$client->contentEndpointRequest(...) invocation in try { ... } finally {
Sleep::assertSleptTimes(4); } (keep the existing
$this->expectException(ClientException::class) outside/above the try), and do
the same pattern in testContentEndpointRequestNon429Exception replacing the
finally assertion with Sleep::assertNeverSlept(); ensure you do not catch and
swallow the exception (do not add a catch that prevents rethrow).

@github-actions

📘 OpenAPI / Swagger preview

➡️ https://OpenStackweb.github.io/summit-api/openapi/pr-534/

This page is automatically updated on each push to this PR.

@smarcet smarcet force-pushed the feature/cron-media-upload-processing branch from 646134d to f8174ae Compare April 21, 2026 21:57

@smarcet smarcet force-pushed the feature/cron-media-upload-processing branch from f8174ae to 62d4c72 Compare April 21, 2026 22:05
@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (3)
app/Models/Foundation/Summit/PendingMediaUpload.php (2)

75-79: Consider preventing duplicate in-flight rows at the DB level.

Nothing in the entity or table prevents two STATUS_PENDING/STATUS_PROCESSING rows from existing for the same PresentationMediaUploadID (see the updateMediaUploadFrom flow). A partial/functional unique constraint is not supported in MySQL, but you can approximate it by deleting prior pending rows on every create (as suggested in PresentationService), or by adding a unique index on PresentationMediaUploadID and reusing one row via upsert. Purely a defense-in-depth suggestion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Models/Foundation/Summit/PendingMediaUpload.php` around lines 75 - 79,
The entity allows multiple in-flight rows for the same
PresentationMediaUploadID, which can cause duplicate pending/processing entries;
update the PendingMediaUpload creation flow to enforce uniqueness by either 1)
deleting any existing rows for the same PresentationMediaUploadID with status in
{STATUS_PENDING, STATUS_PROCESSING} before inserting a new PendingMediaUpload
(adjust the create method used by PresentationService/updateMediaUploadFrom to
perform the delete-and-insert), or 2) add a unique index on
PresentationMediaUploadID at the DB level and change the create path to perform
an upsert/reuse of the existing PendingMediaUpload row (modify the
repository/save logic to handle update-on-conflict); reference the
PendingMediaUpload entity, its $status field and constants
STATUS_PENDING/STATUS_PROCESSING, and the PresentationService
updateMediaUploadFrom flow when making the change.

22-30: Consider a composite (Status, Created) index for the cron query pattern.

DoctrinePendingMediaUploadRepository::getPendingUploads() does WHERE p.status = :pending ORDER BY p.created ASC. The current IDX_Status single-column index filters on status but leaves the ORDER BY to a filesort once the STATUS_PENDING set grows. A composite index on (Status, Created) would allow index-ordered scans and is a near-free win for this hot path.
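The effect of the composite index can be demonstrated with SQLite as a stand-in for MySQL (schema reduced to the columns involved; index name illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PendingMediaUpload (ID INTEGER PRIMARY KEY, Status TEXT, Created TEXT)"
)
conn.execute(
    "CREATE INDEX IDX_Status_Created ON PendingMediaUpload (Status, Created)"
)

plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT ID FROM PendingMediaUpload WHERE Status = ? ORDER BY Created ASC",
    ("PENDING",),
).fetchall()
plan_text = " ".join(row[-1] for row in plan)

# The composite index satisfies both the filter and the ordering, so the plan
# contains no separate sort step ("USE TEMP B-TREE FOR ORDER BY").
print(plan_text)
```

With only the single-column `IDX_Status` index, the same query plan would show a temp B-tree sort once the pending set grows, which is the filesort the comment refers to.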

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Models/Foundation/Summit/PendingMediaUpload.php` around lines 22 - 30,
The query in DoctrinePendingMediaUploadRepository::getPendingUploads() filters
by status and orders by created, so replace the current single-column ORM\Index
(name: IDX_Status) on Status in the PendingMediaUpload entity with a composite
index on (Status, Created) to enable index-ordered scans; update the ORM\Index
annotation on class PendingMediaUpload to include both columns (Status and
Created) and adjust the index name (e.g., IDX_Status_Created) so the DB uses the
composite index for WHERE p.status = :pending ORDER BY p.created ASC.
database/migrations/model/Version20260421200000.php (1)

40-45: down() is not idempotent and will error if re-run or if the column was never added.

If up() partially applied (e.g., column added but FK failed), re-running down() may fail on the first DROP FOREIGN KEY. Consider guarding with existence checks via $schema (which is already passed in and currently unused — PHPMD warning is correct here even if we typically ignore it). Low priority; Doctrine migrations usually run atomically, so this is optional.
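A guarded down() of the kind suggested can be sketched with SQLite (3.35+ for DROP COLUMN); the existence checks play the role of $schema->hasIndex()/hasColumn():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE PendingMediaUpload (ID INTEGER PRIMARY KEY, PresentationMediaUploadID INTEGER)"
)
conn.execute(
    "CREATE INDEX IDX_PresentationMediaUploadID ON PendingMediaUpload (PresentationMediaUploadID)"
)

def down(conn):
    """Drop the index/column only if they still exist, so re-runs are no-ops."""
    has_index = conn.execute(
        "SELECT 1 FROM sqlite_master WHERE type = 'index' "
        "AND name = 'IDX_PresentationMediaUploadID'"
    ).fetchone() is not None
    if has_index:
        conn.execute("DROP INDEX IDX_PresentationMediaUploadID")
    cols = [row[1] for row in conn.execute("PRAGMA table_info(PendingMediaUpload)")]
    if "PresentationMediaUploadID" in cols:
        conn.execute("ALTER TABLE PendingMediaUpload DROP COLUMN PresentationMediaUploadID")

down(conn)
down(conn)  # second run finds nothing to drop and succeeds instead of erroring
remaining = [row[1] for row in conn.execute("PRAGMA table_info(PendingMediaUpload)")]
```

The same shape transfers to the Doctrine migration: check before each DROP so a partially applied up() does not leave down() unrunnable.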

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@database/migrations/model/Version20260421200000.php` around lines 40 - 45,
The down() migration unconditionally drops a foreign key, index and column and
will error if re-run or if those objects don't exist; update down(Schema
$schema) to first check $schema->hasTable('PendingMediaUpload') then $table =
$schema->getTable('PendingMediaUpload') and guard each operation with existence
checks (e.g.
$table->hasForeignKey('FK_PendingMediaUpload_PresentationMediaUpload'),
$table->hasIndex('IDX_PresentationMediaUploadID'),
$table->hasColumn('PresentationMediaUploadID')) before calling addSql to DROP
FOREIGN KEY FK_PendingMediaUpload_PresentationMediaUpload, DROP INDEX
IDX_PresentationMediaUploadID, and DROP COLUMN PresentationMediaUploadID so the
method becomes idempotent and safe if partially applied or re-run.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php`:
- Around line 107-117: The bulk DQL DELETE in deleteByMediaUpload currently
binds a PresentationMediaUpload entity to p.media_upload which fails because
DELETE only accepts scalar identifiers; change the DQL to use a scalar parameter
(e.g., :mediaUploadId) and set that parameter to $mediaUpload->getId(), and keep
the statuses parameter as-is (refer to deleteByMediaUpload,
PresentationMediaUpload, and PendingMediaUpload::STATUS_* for locating the code
and constants).

In `@app/Services/FileSystem/Dropbox/DropboxAdapter.php`:
- Around line 38-39: Remove the raw exception log inside DropboxAdapter::getUrl
by deleting the Log::warning($ex) call and keep only the structured
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(),
$ex->dropboxCode)); ensure the single remaining warning provides the descriptive
message and Dropbox error code; if you want richer context later, add the
exception as a context array (e.g., Log::warning($message, ['exception' =>
$ex])) in a separate refactor.

In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1343-1347: There is a race between deleteMediaUpload
(markAsDeleted + pending_media_upload_repository->deleteByMediaUpload +
$presentation->removeMediaUpload) and an in-flight cron upload; to fix, in
deleteMediaUpload first query pending_media_upload_repository for rows for this
mediaUpload and: if any STATUS_PENDING, remove their temp_file_path(s)
explicitly and then delete those rows before marking storage deleted; if any
STATUS_PROCESSING, do not immediately markAsDeleted but instead set a
cooperative cancel flag on those rows (e.g.,
pending_media_upload_repository->setCancelled or add a cancelled boolean) so the
cron can check between its transition to STATUS_PROCESSING and the actual copy
and abort/clean up, and ensure the cron worker checks this cancel flag and
removes temp files when it aborts; alternatively, take a DB row lock around the
pending row check+delete to prevent the cron from progressing while you perform
the delete; update resetStuckProcessingRows/cron logic to respect the new cancel
flag and to remove temp_file_path for pending rows.
- Around line 1278-1289: Before inserting the new PendingMediaUpload row, remove
any existing pending/processing rows for the same PresentationMediaUpload to
prevent racing/supersede issues: call the repository method that deletes by
media upload (e.g.
$this->pending_media_upload_repository->deleteByMediaUpload($mediaUpload) or a
dedicated supersede method) just prior to creating/adding the new
PendingMediaUpload in the block that builds the $pendingUpload (this addresses
existing PendingMediaUpload rows with STATUS_PENDING or STATUS_PROCESSING and
ensures only the fresh tempFilePath/fileName will be processed).

---

Nitpick comments:
In `@app/Models/Foundation/Summit/PendingMediaUpload.php`:
- Around line 75-79: The entity allows multiple in-flight rows for the same
PresentationMediaUploadID, which can cause duplicate pending/processing entries;
update the PendingMediaUpload creation flow to enforce uniqueness by either 1)
deleting any existing rows for the same PresentationMediaUploadID with status in
{STATUS_PENDING, STATUS_PROCESSING} before inserting a new PendingMediaUpload
(adjust the create method used by PresentationService/updateMediaUploadFrom to
perform the delete-and-insert), or 2) add a unique index on
PresentationMediaUploadID at the DB level and change the create path to perform
an upsert/reuse of the existing PendingMediaUpload row (modify the
repository/save logic to handle update-on-conflict); reference the
PendingMediaUpload entity, its $status field and constants
STATUS_PENDING/STATUS_PROCESSING, and the PresentationService
updateMediaUploadFrom flow when making the change.
- Around line 22-30: The query in
DoctrinePendingMediaUploadRepository::getPendingUploads() filters by status and
orders by created, so replace the current single-column ORM\Index (name:
IDX_Status) on Status in the PendingMediaUpload entity with a composite index on
(Status, Created) to enable index-ordered scans; update the ORM\Index annotation
on class PendingMediaUpload to include both columns (Status and Created) and
adjust the index name (e.g., IDX_Status_Created) so the DB uses the composite
index for WHERE p.status = :pending ORDER BY p.created ASC.

In `@database/migrations/model/Version20260421200000.php`:
- Around line 40-45: The down() migration unconditionally drops a foreign key,
index and column and will error if re-run or if those objects don't exist;
update down(Schema $schema) to first check
$schema->hasTable('PendingMediaUpload') then $table =
$schema->getTable('PendingMediaUpload') and guard each operation with existence
checks (e.g.
$table->hasForeignKey('FK_PendingMediaUpload_PresentationMediaUpload'),
$table->hasIndex('IDX_PresentationMediaUploadID'),
$table->hasColumn('PresentationMediaUploadID')) before calling addSql to DROP
FOREIGN KEY FK_PendingMediaUpload_PresentationMediaUpload, DROP INDEX
IDX_PresentationMediaUploadID, and DROP COLUMN PresentationMediaUploadID so the
method becomes idempotent and safe if partially applied or re-run.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ecb04d13-4052-4cb4-b60c-eaa3c4a6b572

📥 Commits

Reviewing files that changed from the base of the PR and between 27d6008 and f8174ae.

📒 Files selected for processing (8)
  • app/Console/Kernel.php
  • app/Models/Foundation/Summit/PendingMediaUpload.php
  • app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php
  • app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
  • app/Services/FileSystem/Dropbox/DropboxAdapter.php
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
  • app/Services/Model/Imp/PresentationService.php
  • database/migrations/model/Version20260421200000.php
✅ Files skipped from review due to trivial changes (1)
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
🚧 Files skipped from review as they are similar to previous changes (2)
  • app/Console/Kernel.php
  • app/Models/Foundation/Summit/Repositories/IPendingMediaUploadRepository.php

Comment thread app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
Comment on lines 38 to 39
Log::warning($ex);
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Inspect Laravel/Monolog versions and existing exception logging style.

printf '\nComposer package versions:\n'
if [ -f composer.lock ]; then
  jq -r '.packages[] | select(.name=="laravel/framework" or .name=="monolog/monolog" or .name=="psr/log") | "\(.name) \(.version)"' composer.lock
fi

printf '\nNearby raw exception logging patterns:\n'
rg -n -C2 'Log::(debug|info|notice|warning|error|critical|alert|emergency)\(\$[a-zA-Z_][a-zA-Z0-9_]*\)' --type=php

Repository: OpenStackweb/summit-api

Length of output: 50380


Remove the redundant raw exception log on line 38.

Line 38 logs the exception object directly, which duplicates the more detailed warning on line 39. Remove the raw log and keep only the structured message with the Dropbox error code.

Proposed fix
-            Log::warning($ex);
             Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));

Note: Raw exception logging is common throughout the codebase. If adopting structured exception context across the project is desired (e.g., Log::warning($message, ['exception' => $ex])), consider a broader refactoring initiative to ensure consistent patterns.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-Log::warning($ex);
 Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(), $ex->dropboxCode));
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Services/FileSystem/Dropbox/DropboxAdapter.php` around lines 38 - 39,
Remove the raw exception log inside DropboxAdapter::getUrl by deleting the
Log::warning($ex) call and keep only the structured
Log::warning(sprintf("DropboxAdapter::getUrl %s code %s", $ex->getMessage(),
$ex->dropboxCode)); ensure the single remaining warning provides the descriptive
message and Dropbox error code; if you want richer context later, add the
exception as a context array (e.g., Log::warning($message, ['exception' =>
$ex])) in a separate refactor.

Comment thread app/Services/Model/Imp/PresentationService.php
Comment thread app/Services/Model/Imp/PresentationService.php

@smarcet smarcet requested a review from romanetar April 21, 2026 22:20
Collaborator

@romanetar romanetar left a comment
@smarcet please review

Comment thread app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php Outdated

@smarcet smarcet requested a review from romanetar April 22, 2026 14:34
@smarcet
Collaborator Author

smarcet commented Apr 22, 2026

@romanetar please re review

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
app/Services/Model/Imp/PresentationService.php (2)

1281-1294: ⚠️ Potential issue | 🟠 Major

Don’t use row deletion as cancellation for in-flight uploads.

If an existing row is already STATUS_PROCESSING, the cron worker may still finish uploading the old temp file after this re-upload creates a new pending row. Use a cooperative cancel flag/status check or a DB lock around the processing transition and transfer.
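One way to realize the cooperative cancel is an atomic claim plus a cancel flag; an in-memory Python sketch (all names hypothetical, a threading.Lock standing in for a DB row lock / SELECT ... FOR UPDATE):

```python
import threading

STATUS_PENDING, STATUS_PROCESSING, STATUS_CANCELLED, STATUS_DONE = range(4)

class PendingRow:
    """In-memory stand-in for a PendingMediaUpload row."""
    def __init__(self):
        self.status = STATUS_PENDING
        self._lock = threading.Lock()   # stands in for a DB row lock

    def try_claim(self):
        """Atomic PENDING -> PROCESSING transition used by the cron worker."""
        with self._lock:
            if self.status == STATUS_PENDING:
                self.status = STATUS_PROCESSING
                return True
            return False                # already cancelled, claimed, or done

    def cancel(self):
        """Delete/re-upload path: flag the row instead of deleting it."""
        with self._lock:
            if self.status == STATUS_PENDING:
                self.status = STATUS_CANCELLED
                return True
            return False                # in flight; the worker must honor the flag

def worker(row):
    if not row.try_claim():
        return "skipped"                # cancelled before the transfer started
    # ... the slow copy of the temp file to Dropbox would happen here ...
    with row._lock:
        row.status = STATUS_DONE
    return "uploaded"

row = PendingRow()
row.cancel()                            # delete request wins the race
print(worker(row))                      # skipped
```

Because cancellation only flips a status instead of deleting the row, a worker that has already claimed the row keeps a consistent record to check against, rather than racing a vanished one.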

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Services/Model/Imp/PresentationService.php` around lines 1281 - 1294, The
current code unconditionally calls
pending_media_upload_repository->deleteByMediaUpload(...) which can remove a row
that is STATUS_PROCESSING and let the cron still finish the old upload; instead,
stop deleting rows and implement a cooperative cancel or safe transition: modify
the pending-media logic around PendingMediaUpload to set a cancel flag or status
(e.g., setStatus(PendingMediaUpload::STATUS_CANCELLED) or setCancelled(true))
for existing pending rows unless their status is STATUS_PROCESSING, and update
the cron worker's processing transition to acquire a DB lock or check-for-cancel
before and during transfer (use row versioning/for-update or a status check
against STATUS_PROCESSING/STATUS_CANCELLED) so in-flight jobs are not raced by
new uploads; replace deleteByMediaUpload(...) invocation with an update that
only cancels non-processing rows and ensure process code honors the cancel
flag/status.

1349-1350: ⚠️ Potential issue | 🟠 Major

Deleting the pending row still doesn’t stop an active cron upload.

A worker that already moved the row to processing can still write to the private/public paths after markAsDeleted() runs. Also ensure pending temp files are cleaned before removing rows.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Services/Model/Imp/PresentationService.php` around lines 1349 - 1350,
Before removing the pending DB row via
pending_media_upload_repository->deleteByMediaUpload($mediaUpload), first mark
the media upload as deleted/cancelled (call markAsDeleted($mediaUpload) or set
its status to a terminal state) so background cron workers will skip writing,
then remove any pending temp files/paths associated with the upload (delete
files from the private/public temp paths stored on $mediaUpload) and only after
both status is set and temp files cleaned call
pending_media_upload_repository->deleteByMediaUpload($mediaUpload).
🧹 Nitpick comments (1)
app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php (1)

89-94: Minor: guard against malformed JSON body.

json_decode(..., true) returns null on invalid JSON; the isset($data['access_token']) check handles this safely, but the error log ("response missing access_token") will be misleading for a parse failure. Consider distinguishing the two cases or using JSON_THROW_ON_ERROR to let the catch block log the actual decode error.

♻️ Proposed tweak
-            $data = json_decode($response->getBody()->getContents(), true);
-
-            if (!isset($data['access_token'])) {
-                Log::error('AutoRefreshingDropBoxTokenService: response missing access_token.');
+            $data = json_decode($response->getBody()->getContents(), true, 512, JSON_THROW_ON_ERROR);
+
+            if (!is_array($data) || !isset($data['access_token'])) {
+                Log::error('AutoRefreshingDropBoxTokenService: response missing access_token.');
                 return false;
             }
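The distinction is the same in any language; a small Python sketch (hypothetical helper, standard json module) separates the two failure modes that a single log message conflates:

```python
import json

def extract_access_token(body: str):
    """Return (token, error), distinguishing malformed JSON from a missing field."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError as ex:
        return None, f"response body is not valid JSON: {ex.msg}"
    if not isinstance(data, dict) or "access_token" not in data:
        return None, "response missing access_token"
    return data["access_token"], None

assert extract_access_token('{"access_token": "abc"}') == ("abc", None)
assert extract_access_token('{"scope": "files"}')[1] == "response missing access_token"
assert extract_access_token("not-json")[1].startswith("response body is not valid JSON")
```

Logging the decode error separately makes a Dropbox-side HTML error page or truncated body immediately recognizable, instead of masquerading as a missing field.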
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php` around
lines 89 - 94, The log message in AutoRefreshingDropBoxTokenService currently
treats invalid JSON and a missing access_token the same; update the
token-refreshing method to detect JSON parse errors separately (e.g., use
json_decode with JSON_THROW_ON_ERROR inside a try/catch or check
json_last_error()) and log the decode exception or error message when parsing
fails, otherwise keep the existing "response missing access_token" log when
decoding succeeds but access_token is absent; ensure you reference the same
method where $data = json_decode($response->getBody()->getContents(), true) is
called so the catch/logging behavior is adjusted accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/Console/Commands/ReconcileMediaUploadsCommand.php`:
- Around line 69-86: The command currently uses intval() and empty() which can
coerce invalid CLI args (e.g., "abc" -> 0) or treat "0" as absent; update
ReconcileMediaUploadsCommand::handle to explicitly validate that $summit_id and
(if provided) $media_upload_type_id are positive integers before calling
$this->summit_service->reconcileMediaUploadsToPrivateStorage: parse the raw
arguments, ensure they match a positive-integer pattern (reject "0", non-digits,
negatives), and throw an \InvalidArgumentException with a clear message if
validation fails; only cast to int and pass values (or null for omitted optional
type) after validation so the service receives correct scopes.

In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1153-1154: The call and comment that delete pending rows for a
newly-created transient media upload should be removed: in addMediaUploadTo,
delete the line invoking
$this->pending_media_upload_repository->deleteByMediaUpload($mediaUpload) and
remove its preceding comment, since $mediaUpload is created by factory and has
no ID (transient) so the delete is a no-op; leave the rest of the method and any
subsequent flush/persist logic unchanged.

---

Duplicate comments:
In `@app/Services/Model/Imp/PresentationService.php`:
- Around line 1281-1294: The current code unconditionally calls
pending_media_upload_repository->deleteByMediaUpload(...) which can remove a row
that is STATUS_PROCESSING and let the cron still finish the old upload; instead,
stop deleting rows and implement a cooperative cancel or safe transition: modify
the pending-media logic around PendingMediaUpload to set a cancel flag or status
(e.g., setStatus(PendingMediaUpload::STATUS_CANCELLED) or setCancelled(true))
for existing pending rows unless their status is STATUS_PROCESSING, and update
the cron worker's processing transition to acquire a DB lock or check-for-cancel
before and during transfer (use row versioning/for-update or a status check
against STATUS_PROCESSING/STATUS_CANCELLED) so in-flight jobs are not raced by
new uploads; replace deleteByMediaUpload(...) invocation with an update that
only cancels non-processing rows and ensure process code honors the cancel
flag/status.
- Around line 1349-1350: Before removing the pending DB row via
pending_media_upload_repository->deleteByMediaUpload($mediaUpload), first mark
the media upload as deleted/cancelled (call markAsDeleted($mediaUpload) or set
its status to a terminal state) so background cron workers will skip writing,
then remove any pending temp files/paths associated with the upload (delete
files from the private/public temp paths stored on $mediaUpload) and only after
both status is set and temp files cleaned call
pending_media_upload_repository->deleteByMediaUpload($mediaUpload).

---

Nitpick comments:
In `@app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php`:
- Around line 89-94: The log message in AutoRefreshingDropBoxTokenService
currently treats invalid JSON and a missing access_token the same; update the
token-refreshing method to detect JSON parse errors separately (e.g., use
json_decode with JSON_THROW_ON_ERROR inside a try/catch or check
json_last_error()) and log the decode exception or error message when parsing
fails, otherwise keep the existing "response missing access_token" log when
decoding succeeds but access_token is absent; ensure you reference the same
method where $data = json_decode($response->getBody()->getContents(), true) is
called so the catch/logging behavior is adjusted accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4c5d374-5bbb-4c7d-943a-689d228f0090

📥 Commits

Reviewing files that changed from the base of the PR and between f8174ae and 6f0c60f.

📒 Files selected for processing (11)
  • app/Console/Commands/ProcessPendingMediaUploadsCommand.php
  • app/Console/Commands/ReconcileMediaUploadsCommand.php
  • app/Console/Kernel.php
  • app/Repositories/Summit/DoctrinePendingMediaUploadRepository.php
  • app/Services/FileSystem/Dropbox/AutoRefreshingDropBoxTokenService.php
  • app/Services/FileSystem/Dropbox/DropboxAdapter.php
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
  • app/Services/Model/Imp/PresentationService.php
  • app/Services/Model/Imp/SummitService.php
  • tests/Unit/Services/ProcessPendingMediaUploadsTest.php
  • tests/Unit/Services/RetryAfterDropboxClientTest.php
✅ Files skipped from review due to trivial changes (1)
  • app/Services/Model/Imp/SummitService.php
🚧 Files skipped from review as they are similar to previous changes (6)
  • app/Services/FileSystem/Dropbox/DropboxAdapter.php
  • app/Console/Kernel.php
  • app/Console/Commands/ProcessPendingMediaUploadsCommand.php
  • app/Services/FileSystem/Dropbox/RetryAfterDropboxClient.php
  • tests/Unit/Services/ProcessPendingMediaUploadsTest.php
  • tests/Unit/Services/RetryAfterDropboxClientTest.php

Comment thread app/Console/Commands/ReconcileMediaUploadsCommand.php
Comment thread app/Services/Model/Imp/PresentationService.php Outdated
@smarcet smarcet force-pushed the feature/cron-media-upload-processing branch from 6f0c60f to 4f07f71 Compare April 22, 2026 14:42

Collaborator

@romanetar romanetar left a comment

LGTM

@smarcet smarcet merged commit d09804a into main Apr 22, 2026
19 checks passed